ELECTRONIC DEVICE HAVING MICROPHONES WITH CONTROLLABLE FRONT-SIDE GAIN AND REAR-SIDE GAIN, AND ITS PHARMACEUTICAL-FREE AUDIO PROCESSING SYSTEM
Patent abstract:
An electronic device having microphones with controllable front-side gain and rear-side gain. An electronic device is provided which has a rear side and a front side, a first microphone (420) which generates a first signal (421), and a second microphone (430) which generates a second signal (431). An automated balance controller (480) generates a balance signal (464) based on an imaging signal (485). A processor (450) processes the first and second signals (421, 431) to generate at least one beam-formed audio signal (452, 454), where an audio level difference between a front-side gain and a rear-side gain of the beam-formed audio signal is controlled during processing based on the balance signal.
Publication number: BR112012033220B1
Application number: R112012033220-1
Filing date: 2011-05-24
Publication date: 2022-01-11
Inventors: Robert A. Zurek; Kevin Bastyr; Plamen Ivanov; Joel A. Clark
Applicant: Google Technology Holdings LLC
IPC main class:
Patent description:
TECHNICAL FIELD
The present invention relates generally to electronic devices and, more particularly, to electronic devices with the ability to acquire spatial audio information.
BACKGROUND
Portable electronic devices that have multimedia capability have become more popular in recent times. Many of these devices include audio and video recording functionality that allows them to operate as portable, handheld audio-video (AV) systems. Examples of portable electronic devices that have such a capability include, for example, digital wireless cellular telephones and other types of wireless communication devices, personal digital assistants, digital cameras, video recorders, etc. Some portable electronic devices include one or more microphones that can be used to acquire audio information from an operator of the device and/or a subject being recorded. In some cases, two or more microphones are provided on different sides of the device, with one microphone positioned to record the subject and the other microphone positioned to record the operator. However, since the operator is normally closer than the subject to the device's microphone(s), the audio level of an audio input received from the operator will often exceed the audio level of the subject being recorded. As a result, the operator will often be recorded at a much louder audio level than the subject unless the operator self-adjusts his or her volume (e.g., speaks quietly to avoid dominating the subject's audio level). This issue can be exacerbated on devices using omnidirectional microphone capsules. Therefore, it is desirable to provide improved electronic devices having the ability to acquire audio information from more than one source (e.g., subject and operator) that can be located on different sides of the device. It is also desirable to provide methods and systems in such devices to balance the audio levels of the two sources at appropriate audio levels independent of their distance from the device.
Furthermore, other desirable features and aspects of the present invention will become apparent from the subsequent detailed description and appended claims, taken in combination with the accompanying drawings and the foregoing technical field and background.
BRIEF DESCRIPTION OF THE DRAWINGS
A more complete understanding of the present invention can be derived by referring to the detailed description and claims when considered in combination with the following figures, wherein similar reference numerals refer to similar elements throughout the figures. Figure 1A is a front perspective view of an electronic device according to an exemplary implementation of the disclosed embodiments; Figure 1B is a rear perspective view of the electronic device of Figure 1A; Figure 2A is a front view of the electronic device of Figure 1A; Figure 2B is a rear view of the electronic device of Figure 1A; Figure 3 is a schematic diagram of an electronic device microphone and video camera configuration according to some of the disclosed embodiments; Figure 4 is a block diagram of an audio processing system of an electronic device according to some of the disclosed embodiments; Figure 5A is an exemplary polar graph of a front-side-oriented beam-formed audio signal generated by the audio processing system in accordance with an implementation of some of the disclosed embodiments; Figure 5B is an exemplary polar graph of a rear-side-oriented beam-formed audio signal generated by the audio processing system in accordance with an implementation of some of the disclosed embodiments.
Figure 5C is an exemplary polar graph of a front-side-oriented beam-formed audio signal and a rear-side-oriented beam-formed audio signal generated by the audio processing system in accordance with an implementation of some of the disclosed embodiments; Figure 5D is an exemplary polar graph of a front-side-oriented beam-formed audio signal and a rear-side-oriented beam-formed audio signal generated by the audio processing system according to another implementation of some of the disclosed embodiments; Figure 5E is an exemplary polar graph of a front-side-oriented beam-formed audio signal and a rear-side-oriented beam-formed audio signal generated by the audio processing system in accordance with yet another implementation of some of the disclosed embodiments; Figure 6 is a block diagram of an audio processing system of an electronic device in accordance with some of the other disclosed embodiments; Figure 7A is an exemplary polar graph of a front- and rear-oriented beam-formed audio signal generated by the audio processing system in accordance with an implementation of some of the disclosed embodiments; Figure 7B is an exemplary polar graph of a front- and rear-oriented beam-formed audio signal generated by the audio processing system in accordance with another implementation of some of the disclosed embodiments; Figure 7C is an exemplary polar graph of a front- and rear-oriented beam-formed audio signal generated by the audio processing system in accordance with yet another implementation of some of the disclosed embodiments; Figure 8 is a schematic diagram of an electronic device microphone and video camera configuration according to some of the other disclosed embodiments; Figure 9 is a block diagram of an audio processing system of an electronic device in accordance with some of the other disclosed embodiments; Figure 10A is an exemplary polar graph of a left-front-oriented beam-formed audio signal generated by the audio processing system in accordance with an implementation of some of the disclosed embodiments; Figure
10B is an exemplary polar graph of a front-right-oriented beam-formed audio signal generated by the audio processing system in accordance with an implementation of some of the other disclosed embodiments; Figure 10C is an exemplary polar graph of a rear-side-oriented beam-formed audio signal generated by the audio processing system in accordance with an implementation of some of the other disclosed embodiments; Figure 10D is an exemplary polar graph of the left-front-oriented beam-formed audio signal, the right-front-oriented beam-formed audio signal, and the rear-side-oriented beam-formed audio signal generated by the audio processing system when combined to generate a stereo-surround output in accordance with an implementation of some of the disclosed embodiments; Figure 11 is a block diagram of an audio processing system of an electronic device according to some other disclosed embodiments; Figure 12A is an exemplary polar graph of a front-left-oriented beam-formed audio signal generated by the audio processing system in accordance with an implementation of some of the disclosed embodiments; Figure 12B is an exemplary polar graph of a front-right-oriented beam-formed audio signal generated by the audio processing system in accordance with an implementation of some of the disclosed embodiments; Figure 12C is an exemplary polar graph of the front-left-oriented beam-formed audio signal and the front-right-oriented beam-formed audio signal when combined as a stereo signal according to an implementation of some of the disclosed embodiments; and Figure 13 is a block diagram of an electronic device that may be used in an implementation of the disclosed embodiments.
DETAILED DESCRIPTION
As used herein, the word "exemplary" means "serving as an example, instance, or illustration." The following detailed description is merely exemplary in nature and is not intended to limit the invention or the application and uses of the invention. Any embodiment described herein as "exemplary" should not necessarily be construed as preferred or advantageous over other embodiments.
All embodiments described in this Detailed Description are exemplary embodiments provided to enable persons skilled in the art to make or use the invention and not to limit the scope of the invention, which is defined by the claims. Further, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, summary, or the following detailed description. Before describing in detail the embodiments according to the present invention, it should be noted that the embodiments reside primarily in an electronic device having a rear side and a front side, a first microphone which generates a first output signal, and a second microphone which generates a second output signal. An automated balance controller is provided that generates a balance signal based on an imaging signal. A processor processes the first and second output signals to generate at least one beam-formed audio signal, where an audio level difference between a front-side gain and a rear-side gain of the beam-formed audio signal is controlled during processing based on the balance signal. Before describing the electronic device with reference to Figures 3-13, an example of an electronic device and an operating environment will be described with reference to Figures 1A-2B. Figure 1A is a front perspective view of an electronic device 100 in accordance with an exemplary implementation of the disclosed embodiments. Figure 1B is a rear perspective view of electronic device 100. The perspective views in Figures 1A and 1B are illustrated with reference to an operator 140 of electronic device 100 who is recording a subject 150. Figure 2A is a front view of electronic device 100, and Figure 2B is a rear view of electronic device 100. Electronic device 100 can be any type of electronic device having multimedia recording capability.
For example, electronic device 100 can be any type of portable electronic device with audio/video recording capability, including a camcorder, a still-view camera, a personal media recorder and player, or a portable wireless computing device. As used herein, the term "wireless computing device" refers to any portable computer or other hardware designed to communicate with an infrastructure device over an air interface through a wireless channel. A wireless computing device is "portable" and potentially mobile or "nomadic", meaning that the wireless computing device can physically move around; however, at any given time it may be mobile or stationary. A wireless computing device can be any of several types of mobile computing devices, which include, without limitation, mobile stations (e.g., cell phones, mobile radios, mobile computers, handheld or laptop devices, personal computers, personal digital assistants (PDAs), or the like), access terminals, subscriber stations, user equipment, or any other devices configured to communicate via wireless communications. The electronic device 100 has a housing 102, 104, a left-hand portion 101, and a right-hand portion 103 opposite the left-hand portion 101. The housing 102, 104 has a width dimension extending in a y direction, a length dimension extending in an x direction, and a thickness dimension extending in a z direction (into and out of the page). The rear side is oriented in a +z direction and the front side is oriented in a -z direction. Of course, as the electronic device is reoriented, the designations of "right", "left", "width" and "length" may change. The current designations are given for convenience. More specifically, the housing includes a rear housing 102 on the operator side or rear side of the apparatus 100, and a front housing 104 on the subject side or front side of the apparatus 100.
The rear housing 102 and front housing 104 are assembled to form a housing for various components, including a circuit board (not shown), a headset speaker (not shown), an antenna (not shown), a video camera 110, and a user interface 107 including microphones 120, 130, 170 that are coupled to the circuit board. The housing includes a plurality of ports for the video camera 110 and the microphones 120, 130, 170. Specifically, the rear housing 102 includes a first port for a rear-side microphone 120, and the front housing 104 has a second port for a front-side microphone 130. The first port and second port share an axis. The first microphone 120 is disposed along the axis in/near the first port of the rear housing 102, and the second microphone 130 is disposed along the axis opposite the first microphone 120 and in/near the second port of the front housing 104. Optionally, in some implementations, the front housing 104 of the apparatus 100 may include a third port on the front housing 104 for another microphone 170, and a fourth port for the video camera 110. The third microphone 170 is disposed in/near the third port. Video camera 110 is positioned on the front side and thereby oriented in the same direction as front housing 104, opposite the operator, to allow images of the subject to be acquired as the subject is being recorded by the camera. An axis through the first and second ports may align with a center of a video frame of the video camera 110 positioned in the front housing. The left-side portion 101 is defined by and shared between the rear housing 102 and the front housing 104, and oriented in a +y direction that is substantially perpendicular to the rear housing 102 and the front housing 104. The right-hand portion 103 is opposite the left-hand portion 101 and is defined by and shared between the rear housing 102 and the front housing 104.
The right-hand portion 103 is oriented in a -y direction that is substantially perpendicular to the rear housing 102 and the front housing 104. Figure 3 is a schematic diagram of a microphone and video camera configuration 300 of the electronic device in accordance with some of the disclosed embodiments. Configuration 300 is illustrated with reference to a Cartesian coordinate system and includes the relative locations of a rear-side microphone 220 with respect to a front-side microphone 230 and video camera 210. Microphones 220, 230 are located along a common z axis and separated by 180 degrees along a line at 90 degrees and 270 degrees. The first physical microphone element 220 is on the operator or rear side of the handheld electronic device 100, and the second physical microphone element 230 is on the subject or front side of the electronic device 100. The y axis is oriented along a straight line at zero and 180 degrees, and the x axis is oriented perpendicular to the y axis and the z axis in an upward direction. Camera 210 is located along the y axis and points into the page in the z direction toward the subject on the front of the device, as does the front-side microphone 230. The subject (not shown) would be located in front of the front-side microphone 230, and the operator (not shown) would be located behind the rear-side microphone 220. In this way the microphones are oriented such that they can capture audio or sound signals from both the operator making the video and a subject being recorded by the video camera 210. The physical microphones 220, 230 can be any known type of physical microphone elements, including omnidirectional microphones, directional microphones, pressure microphones, pressure-gradient microphones, or any other acoustic-to-electric transducer or sensor that converts sound into an electrical audio signal, etc.
In one embodiment, where the physical microphone elements 220, 230 are omnidirectional physical microphone elements (OPMEs), they will have omnidirectional polar patterns that sense/capture sound arriving more or less equally from all directions. In one implementation, the physical microphones 220, 230 may be part of a microphone array that is processed using beamforming techniques, such as delay-and-sum (or delay-and-difference) processing, to establish directional patterns based on outputs generated by the physical microphones 220, 230. As will now be described with reference to Figures 4-5E, the gain of the rear side corresponding to the operator can be controlled and attenuated in relation to the gain of the front side corresponding to the subject so that the audio level of the operator does not dominate the audio level of the subject. Figure 4 is a block diagram of an audio processing system 400 of an electronic device 100 in accordance with some of the disclosed embodiments. Audio processing system 400 includes a microphone array that includes a first microphone 420 that generates a first signal 421 in response to incoming sound, and a second microphone 430 that generates a second signal 431 in response to incoming sound. These electrical signals are generally voltage signals that correspond to the sound pressure captured at the microphones. A first filter module 422 is designed to filter the first signal 421 to generate a first phase-delayed audio signal 425 (e.g., a phase-delayed version of the first signal 421), and a second filter module 432 is designed to filter the second signal 431 to generate a second phase-delayed audio signal 435. Although the first filter module 422 and the second filter module 432 are illustrated as being separate from the processor 450, it is noted that in other implementations the first filter module 422 and the second filter module 432 may be implemented in processor 450, as indicated by dashed-line rectangle 440. Automated balance controller 480 generates a balance signal 464 based on an imaging signal 485. Depending on the implementation, the imaging signal 485 can be provided from any of a number of different sources, as will be described in greater detail below. In one implementation, video camera 110 is coupled to automated balance controller 480. Processor 450 receives a plurality of input signals including the first signal 421, the first phase-delayed audio signal 425, the second signal 431, and the second phase-delayed audio signal 435. Processor 450 processes these input signals 421, 425, 431, 435, based on balance signal 464 (and possibly based on other signals such as balance select signal 465 or an AGC signal 462), to generate a front-side-oriented beam-formed audio signal 452 and a rear-side-oriented beam-formed audio signal 454. As will be described below, balance signal 464 can be used to control an audio level difference between a front-side gain of the front-side-oriented beam-formed audio signal 452 and a rear-side gain of the rear-side-oriented beam-formed audio signal 454 during beamform processing. This allows control of the audio levels of a subject-oriented virtual microphone relative to an operator-oriented virtual microphone. The beamform processing performed by processor 450 may be sum-and-delay processing, difference-and-delay processing, or any other known beamform processing technique for generating directional patterns based on microphone input signals. Techniques for generating such first-order beam shapes are well known in the art and will not be described here. First-order beamforms are those that follow the form A + Bcos(θ) in their directional characteristics, where A and B are constants representing the omnidirectional and bidirectional components of the beam-formed signal and θ is the angle of incidence of the acoustic wave.
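The delay-and-difference processing and the A + Bcos(θ) directional form described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the capsule spacing, sample rate, function names, and the use of a single `balance` gain in place of balance signal 464 are all assumptions.

```python
import numpy as np

C = 343.0   # speed of sound in air, m/s
D = 0.02    # assumed spacing between the two opposed capsules, m
FS = 48000  # assumed sample rate, Hz

def first_order_pattern(theta, a, b):
    """Directional response A + B*cos(theta) of a first-order beamform.

    A = B = 0.5 gives a cardioid, A = 0 a dipole (bidirectional),
    and B = 0 an omnidirectional response.
    """
    return a + b * np.cos(theta)

def endfire_beams(front_sig, rear_sig, balance=1.0, fs=FS, d=D):
    """Delay-and-difference beamforming for two opposed omni capsules.

    Delaying one capsule's signal by the acoustic travel time d/C and
    subtracting it from the other capsule's signal yields a first-order
    beam whose null points at the delayed capsule.  `balance` scales the
    rear (operator-facing) beam relative to the front (subject-facing)
    beam, standing in for the balance signal 464 in the text.
    """
    n = int(round(fs * d / C))                 # travel time in samples
    pad = np.zeros(n)
    rear_delayed = np.concatenate([pad, rear_sig])[:len(rear_sig)]
    front_delayed = np.concatenate([pad, front_sig])[:len(front_sig)]
    front_beam = front_sig - rear_delayed      # null toward the operator
    rear_beam = rear_sig - front_delayed       # null toward the subject
    return front_beam, balance * rear_beam
```

A plane wave arriving from the front reaches the front capsule first and the rear capsule d/C later, so the rear beam's subtraction cancels it; in practice the differential output also has a first-order high-pass response that a real implementation would equalize.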
In one implementation, the balance signal 464 may be used to determine a ratio of a first gain of the rear-side-oriented beam-formed audio signal 454 to a second gain of the front-side-oriented beam-formed audio signal 452. In other words, the balance signal 464 will determine the relative weight of the first gain with respect to the second gain in such a way that sound waves emanating from a front-side audio source are emphasized relative to sound waves emanating from a rear-side audio source during playback of the beam-formed audio signals 452, 454. The relative gain of the rear-side-oriented beam-formed audio signal 454 with respect to the front-side-oriented beam-formed audio signal 452 can be controlled during processing based on the balance signal 464. To do this, in one implementation, the gain of the rear-side-oriented beam-formed audio signal 454 and/or the gain of the front-side-oriented beam-formed audio signal 452 may be varied. For example, in one implementation, the rear and front gains are adjusted so that they are substantially balanced and the operator audio will not overpower the subject audio. In one implementation, processor 450 may include a look-up table (LUT) that receives the input signals and balance signal 464, and generates the front-side-oriented beam-formed audio signal 452 and the rear-side-oriented beam-formed audio signal 454. The LUT is a table of values which generates different signals 452, 454 depending on the values of balance signal 464. In another implementation, the processor 450 is designed to evaluate an equation based on the input signals 421, 425, 431, 435 and the balance signal 464 to generate the front-side-oriented beam-formed audio signal 452 and a rear-side-oriented beam-formed audio signal 454.
The equation includes coefficients for the first signal 421, the first phase-delayed audio signal 425, the second signal 431, and the second phase-delayed audio signal 435, and the values of these coefficients can be adjusted or controlled based on the balance signal 464 to generate a gain-adjusted front-side-oriented beam-formed audio signal 452 and/or a gain-adjusted rear-side-oriented beam-formed audio signal 454. Gain control examples will now be described with reference to Figures 5A-5E. Preliminarily, note that in any of the polar graphs described below, signal magnitudes are plotted linearly to show the directional or angular response of a specific signal. Furthermore, in the examples that follow, for the purpose of illustrating an example, it can be assumed that the subject is generally located at approximately 90° while the operator is located at approximately 270°. The directional patterns shown in Figures 5A-5E are slices through the directional response forming a plane as would be observed by a viewer located above the electronic device 100 of Figure 1 looking down, where the z axis in Figure 3 corresponds to the 90°-270° line, and the y axis in Figure 3 corresponds to the 0°-180° line. Figure 5A is an exemplary polar graph of a front-side-oriented beam-formed audio signal 452 generated by audio processing system 400 in accordance with an implementation of some of the disclosed embodiments. As illustrated in Figure 5A, the front-side-oriented beam-formed audio signal 452 has a first-order cardioid directional pattern that is oriented or points toward the subject in the -z direction, in front of the device. This first-order directional pattern has a maximum at 90 degrees and has a relatively strong directional sensitivity to sound originating from the subject's direction.
The front-side-oriented beam-formed audio signal 452 also has a null at 270 degrees that points toward the operator (in the +z direction) who is recording the subject, which indicates that there is little or no directional sensitivity to sound that originates from the operator's direction. Stated differently, the front-side-oriented beam-formed audio signal 452 emphasizes sound waves emanating from the front of the device and has a null oriented toward the back of the device. Figure 5B is an exemplary polar graph of a rear-side-oriented beam-formed audio signal 454 generated by audio processing system 400 in accordance with an implementation of some of the disclosed embodiments. As illustrated in Figure 5B, the rear-side-oriented beam-formed audio signal 454 also has a first-order cardioid directional pattern, but points or is oriented toward the operator in the +z direction behind the device, and has a maximum at 270 degrees. This indicates that there is strong directional sensitivity to sound originating from the operator's direction. The rear-side-oriented beam-formed audio signal 454 also has a null (at 90 degrees) pointing toward the subject (in the -z direction), which indicates that there is little or no directional sensitivity to sound originating from the direction of the subject. Stated differently, the rear-side-oriented beam-formed audio signal 454 emphasizes sound waves emanating from behind the device and has a null oriented toward the front of the device. Although not illustrated in Figure 4, in some embodiments, the beam-formed audio signals 452, 454 can be combined into a single-channel audio output signal that can be transmitted and/or recorded. For ease of illustration, the responses of a front-side-oriented beam-formed audio signal 452 and a rear-side-oriented beam-formed audio signal 454 will be shown together; however, it is noted that this is not necessarily intended to indicate that the beam-formed audio signals 452, 454 must be combined. Figure
5C is an exemplary polar graph of a front-side-oriented beam-formed audio signal 452 and a rear-side-oriented beam-formed audio signal 454-1 generated by the audio processing system 400 in accordance with an implementation of some of the disclosed embodiments. Compared to Figure 5B, the directional response of the operator's virtual microphone illustrated in Figure 5C has been attenuated relative to the directional response of the subject's virtual microphone to prevent the operator's audio level from dominating the subject's audio level. These adjustments could be used in a situation where the subject is located at a relatively close distance from the electronic device 100, as indicated by the balance signal 464. Figure 5D is an exemplary polar graph of a front-side-oriented beam-formed audio signal 452 and a rear-side-oriented beam-formed audio signal 454-2 generated by the audio processing system 400 in accordance with another implementation of some of the disclosed embodiments. Compared to Figure 5C, the directional response of the operator's virtual microphone illustrated in Figure 5D has been further attenuated relative to the directional response of the subject's virtual microphone to prevent the operator's audio level from dominating the subject's audio level. These adjustments could be utilized in a situation where the subject is located at a relatively medium distance away from the electronic device 100, as indicated by the balance signal 464. Figure 5E is an exemplary polar graph of a front-side-oriented beam-formed audio signal 452 and a rear-side-oriented beam-formed audio signal 454-3 generated by the audio processing system 400 in accordance with yet another implementation of some of the disclosed embodiments.
Compared to Figure 5D, the directional response of the operator's virtual microphone illustrated in Figure 5E has been further attenuated relative to the directional response of the subject's virtual microphone to prevent the operator's audio level from dominating the subject's audio level. These adjustments could be used in a situation where the subject is located at a relatively far distance from the electronic device 100, as indicated by the balance signal 464. Thus, Figures 5C-5E generally illustrate that the relative gain of the rear-side-oriented beam-formed audio signal 454 with respect to the front-side-oriented beam-formed audio signal 452 can be controlled or adjusted during processing based on the balance signal 464. In this way the gain ratio of the first and second beam-formed audio signals 454, 452 can be controlled so that one does not dominate the other. In one implementation, the relative gain of the first beam-formed audio signal 454 may be decreased with respect to the gain of the second beam-formed audio signal 452 so that the audio level corresponding to the operator is less than or equal to the audio level corresponding to the subject (e.g., a ratio of subject audio level to operator audio level is greater than or equal to one). This is another way of adjusting the processing so that the operator's audio level will not overpower that of the subject. Although the beam-formed audio signals 452, 454 shown in Figures 5A through 5E both have first-order cardioid directional beam shapes that are either rear-oriented or front-oriented, those skilled in the art will recognize that the beam-formed audio signals 452, 454 are not necessarily limited to having these specific types of first-order cardioid directional patterns, which are shown to illustrate an exemplary implementation.
In other words, although the directional patterns are cardioid-shaped, this does not necessarily indicate that the beam-formed audio signals are limited to having a cardioid shape; they may have any other shape that is associated with first-order directional beam-shape patterns, such as a dipole, hypercardioid, supercardioid, etc. Depending on the balance signal 464, the directional patterns can vary from a nearly cardioid beam shape to a nearly bidirectional beam shape, or from a nearly cardioid beam shape to a nearly omnidirectional beam shape. Alternatively, a higher-order directional beam shape could be used in place of the first-order directional beam shape. Furthermore, although the beam-formed audio signals 452, 454 are illustrated as having cardioid directional patterns, it will be recognized by those skilled in the art that these are only mathematically ideal examples and that, in some practical implementations, these idealized beamform patterns will not necessarily be obtained. As noted above, balance signal 464, balance select signal 465, and/or AGC signal 462 can be used to control the audio level difference between a front-side gain of the front-side-oriented beam-formed audio signal 452 and a rear-side gain of the rear-side-oriented beam-formed audio signal 454 during beamform processing. Each of these signals will now be described in greater detail for various implementations.
Balance signal and examples of imaging control signals that can be used to generate the balance signal
The imaging signal 485 used to determine the balance signal 464 may vary depending on the implementation. For example, in some embodiments, automated balance controller 480 can be a video controller (not shown) that is coupled to video camera 110, or it can be coupled to a video controller that is coupled to video camera 110.
The imaging signal 485 sent to the automated balance controller 480 to generate the balance signal 464 may be determined from (or based on) one or more of (1) a zoom control signal for the video camera 110, (2) a focal length of the video camera 110, or (3) an angular field of view of a video frame from the video camera 110. Any of these parameters can be used individually or in combination to generate the balance signal 464.
Balance signals based on zoom control
In some implementations, the physical video zoom of the video camera 110 is used to determine or adjust the audio level difference between the front side gain and the rear side gain. In this way, the video zoom control can be linked to a corresponding "audio zoom". In most modes, a narrow zoom (or high zoom value) can be assumed to correspond to a far distance between the subject and the operator, while a wide zoom (or low zoom value) can be assumed to correspond to a closer distance between the subject and the operator. As such, the audio level difference between the front side gain and the rear side gain increases as the zoom control signal is increased or as the angular field of view is narrowed. By contrast, the audio level difference between the front side gain and the rear side gain decreases as the zoom control signal is decreased or as the angular field of view is widened. In one implementation, the audio level difference between the front side gain and the rear side gain can be determined from a lookup table for a specific value of the zoom control signal. In another implementation, the audio level difference between the front side gain and the rear side gain can be determined from a function relating the value of the zoom control signal to distance. In some embodiments, the balance signal 464 may be a zoom control signal for the video camera 110 (or may be derived based on a zoom control signal for the video camera 110 that is sent to the automated balance controller 480). The zoom control signal can be a digital zoom control signal that controls an apparent angle of view of the video camera, or an analog/optical zoom control signal that controls lens position in the camera. In one implementation, pre-set first-order beamform values can be assigned to specific values (or ranges of values) of the zoom control signal to determine an appropriate mix of subject and operator audio. In some embodiments, the zoom control signal for the video camera can be controlled by a user interface (UI). Any known video zoom UI methodology can be used to generate the zoom control signal. For example, in some embodiments, the video zoom can be controlled by the operator via a pair of buttons, a rocker control, virtual controls on the device display (including a dragged selection of an area), by tracking of the operator's eyes, etc.
Field-of-view and focal-length-based balance signals
Focal length information from the camera 110 to the subject 150 may be obtained from a video controller for the video camera 110 or from any other distance determining circuitry in the device. As such, in other implementations, the focal length of the video camera 110 can be used to adjust the audio level difference between the front side gain and the rear side gain. In one implementation, the balance signal 464 may be a calculated focal length from the video camera 110 that is sent to the automated balance controller 480 by a video controller. In still other implementations, the audio level difference between the front side gain and the rear side gain can be adjusted based on the angular field of view of a video frame from the video camera 110, which is calculated and sent to the automated balance controller 480.
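As a concrete illustration of the lookup-table approach described above, the sketch below maps a zoom control value to a front/rear audio level difference in dB. The table entries, the zoom range, the function name, and the use of linear interpolation between entries are illustrative assumptions, not values from this disclosure.

```python
# Hypothetical zoom-to-level-difference lookup table. A wide zoom (low
# value) assumes a close subject and adds no extra front side emphasis;
# a narrow zoom (high value) assumes a far subject and a large front
# side emphasis. All numbers are placeholders for illustration.
ZOOM_TO_LEVEL_DIFF_DB = [
    (1.0, 0.0),    # wide zoom: subject assumed close
    (2.0, 6.0),
    (4.0, 12.0),
    (8.0, 18.0),   # narrow zoom: subject assumed far
]

def balance_from_zoom(zoom_value: float) -> float:
    """Return the front-minus-rear audio level difference (dB) for a
    given zoom control value, linearly interpolating between table
    entries and clamping outside the table's range."""
    table = ZOOM_TO_LEVEL_DIFF_DB
    if zoom_value <= table[0][0]:
        return table[0][1]
    if zoom_value >= table[-1][0]:
        return table[-1][1]
    for (z0, d0), (z1, d1) in zip(table, table[1:]):
        if z0 <= zoom_value <= z1:
            frac = (zoom_value - z0) / (z1 - z0)
            return d0 + frac * (d1 - d0)
```

A function-based mapping, as in the other implementation mentioned above, would simply replace the table walk with a closed-form zoom-to-distance expression.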
Proximity-based balance signals
In other implementations, the balance signal 464 may be based on the estimated, measured, or sensed distance between the operator and the electronic device 100, and/or on the estimated, measured, or sensed distance between the subject and the electronic device 100. In some embodiments, the electronic device 100 includes proximity sensor(s) (infrared, ultrasonic, etc.), proximity detection circuitry, or another type of distance measuring device (not shown) that may be the source of the proximity information provided as the imaging signal 485. For example, a front side proximity sensor may generate a front side proximity sensor signal that corresponds to a first distance between a video subject 150 and the device 100, and a rear side proximity sensor may generate a rear side proximity sensor signal that corresponds to a second distance between an operator 140 of the camera 110 and the device 100. The imaging signal 485 sent to the automated balance controller 480 to generate the balance signal 464 is based on the front side proximity sensor signal and/or the rear side proximity sensor signal. In one embodiment, the balance signal 464 may be determined from estimated, measured, or sensed distance information that is indicative of the distance between the electronic device 100 and a subject being recorded by the video camera 110. In another embodiment, the balance signal 464 may be determined from a ratio of first distance information to second distance information, where the first distance information is indicative of the estimated, measured, or sensed distance between the electronic device 100 and a subject 150 being recorded by the camera 110, and where the second distance information is indicative of the estimated, measured, or sensed distance between the electronic device 100 and an operator 140 of the video camera 110.
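One way to form a balance value from the distance ratio described above is sketched below, assuming a simple free-field 1/r level model (so each doubling of the subject-to-operator distance ratio calls for roughly 6 dB of additional front side emphasis). The function name, the default operator distance, and the dB mapping are illustrative assumptions, not part of this disclosure.

```python
import math

def balance_from_distances(subject_dist_m: float,
                           operator_dist_m: float = 0.5) -> float:
    """Return a front-minus-rear level difference (dB) from the ratio
    of the subject distance (first distance information) to the
    operator distance (second distance information), assuming sound
    level falls off as 1/r."""
    ratio = subject_dist_m / operator_dist_m
    return 20.0 * math.log10(ratio)
```

With both proximity sensor signals available, the two measured distances would be used directly; with only a front side sensor, the operator distance would fall back to the fixed default.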
In one implementation, the second distance (operator) information can be set to a fixed distance at which a camera operator is normally located (e.g., based on an average human holding the device in an intended usage mode). In such an embodiment, the automated balance controller 480 assumes that the camera operator is at a predetermined distance from the device and generates a balance signal 464 reflecting that predetermined distance. In essence, this allows a fixed gain to be assigned to the operator, because the operator's distance remains relatively constant, and the front side gain can then be increased or decreased as needed. If the required subject audio level exceeds the audio system's available level, the subject audio level will be set close to maximum and the operator audio level will be attenuated. In another implementation, pre-set first-order beamform values can be assigned to specific values of the distance information.
Balance select signal
As noted above, in some implementations, the automated balance controller 480 generates a balance select signal 465 which is processed by the processor 450 along with the input signals 421, 425, 431, 435 to generate the front side oriented beam-formed audio signal 452 and the rear side oriented beam-formed audio signal 454. In other words, the balance select signal 465 can also be used during beamform processing to control an audio level difference between the front side gain of the front side oriented beam-formed audio signal 452 and the rear side gain of the rear side oriented beam-formed audio signal 454. The balance select signal 465 can direct the processor 450 to adjust the audio level difference in a relative mode (e.g., the ratio of the front side gain to the rear side gain) or a direct mode (e.g., attenuate the rear side gain by a given value, or increase the front side gain by a given value).
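The fixed-operator-gain behavior described above, including the case where the requested subject level exceeds the system's available gain, could be sketched as follows. The maximum available front side gain and the exact split between front side boost and rear side attenuation are assumptions for illustration only.

```python
def allocate_gains(level_diff_db: float,
                   max_front_gain_db: float = 12.0):
    """Split a requested front/rear audio level difference into a
    front side boost and a rear side attenuation. While headroom
    remains, the operator (rear side) gain is left untouched; once the
    requested difference exceeds the available front side gain, the
    front gain is clamped near maximum and the remainder is taken by
    attenuating the rear side, as described in the text."""
    front_gain_db = min(level_diff_db, max_front_gain_db)
    rear_gain_db = -(level_diff_db - front_gain_db)
    return front_gain_db, rear_gain_db
```

The same routine illustrates the direct mode of the balance select signal (apply each returned value as-is); a relative mode would instead hold the difference of the two returned values to the requested ratio.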
In one implementation, the balance select signal 465 is used to adjust the audio level difference between the front side gain and the rear side gain to a predetermined value (e.g., an X dB difference between the front side gain and the rear side gain). In another implementation, the front side gain and/or the rear side gain can be adjusted to a predetermined value during processing based on the balance select signal 465.
Automatic gain control feedback signal
The automatic gain control (AGC) module 460 is optional. The AGC module 460 receives the front side oriented beam-formed audio signal 452 and the rear side oriented beam-formed audio signal 454, and generates an AGC feedback signal 462 based on the signals 452, 454. Depending on the implementation, the AGC feedback signal 462 may be used to adjust or modify the balance signal 464 itself, or alternatively, it may be used in combination with the balance signal 464 and/or the balance select signal 465 to adjust the gain of the front side oriented beam-formed audio signal 452 and/or the rear side oriented beam-formed audio signal 454 generated by the processor 450. The AGC feedback signal 462 is used to keep a time-averaged ratio of the subject audio level to the operator audio level substantially constant regardless of changes in the distance between the subject/operator and the electronic device 100, or changes in the actual audio levels of the subject and operator (for example, if the subject or operator starts yelling or whispering). In a specific implementation, the time-averaged subject-to-operator ratio increases as the video is zoomed in (for example, as the value of the zoom control signal changes). In another implementation, the audio level of the rear side oriented beam-formed audio signal 454 is held at a constant time-averaged level independent of the audio level of the front side oriented beam-formed audio signal 452.
Figure 6 is a block diagram of an audio processing system 600 of an electronic device 100 in accordance with some of the disclosed embodiments. Figure 6 is similar to Figure 4, and so the features common to Figure 4 will not be described again for the sake of brevity. This embodiment differs from that of Figure 4 in that the system 600 outputs a single beam-formed audio signal 652 that includes both operator and subject audio. More specifically, in the embodiment illustrated in Figure 6, the various input signals provided to the processor 650 are processed, based on the balance signal 664, to generate a single beam-formed audio signal 652 in which an audio level difference between a front side gain of a front side oriented lobe 652-A (Figure 7) and a rear side gain of a rear side oriented lobe 652-B (Figure 7) of the beam-formed audio signal 652 is controlled during processing based on the balance signal 664 (and possibly based on other signals such as the balance select signal 665 and/or the AGC signal 662). The relative gain of the rear side oriented lobe 652-B with respect to the front side oriented lobe 652-A can be controlled or adjusted during processing based on the balance signal 664 to adjust a ratio of the gains of the two lobes. In other words, the maximum gain value of the major lobe 652-A and the maximum gain value of the minor lobe 652-B form a ratio that reflects a desired ratio of the subject audio level to the operator audio level. In this way, the beam-formed audio signal 652 can be controlled to emphasize sound waves emanating from in front of the device with respect to sound waves emanating from behind the device. In one implementation, the beamform of the beam-formed audio signal 652 emphasizes the front side audio level and/or de-emphasizes the rear side audio level such that a processed version of the front side audio level is at least equal to a processed version of the rear side audio level. Any of the balance signals 664 described above may also be used in this embodiment.
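A minimal sketch of the AGC feedback behavior described above, tracking time-averaged front and rear beam levels and emitting a correction that drives their ratio toward a target. The class name, the exponential smoothing constant, and the target ratio are illustrative assumptions, not values from this disclosure.

```python
class AgcFeedback:
    """Track smoothed (time-averaged) front and rear beam levels, in
    dB, and return a correction that drives the subject-to-operator
    level ratio toward a target, independent of distance changes or
    the talkers yelling/whispering."""

    def __init__(self, target_ratio_db: float = 6.0, alpha: float = 0.1):
        self.target_ratio_db = target_ratio_db
        self.alpha = alpha          # smoothing constant for the averages
        self.front_avg_db = 0.0
        self.rear_avg_db = 0.0

    def update(self, front_level_db: float, rear_level_db: float) -> float:
        # Exponential moving averages stand in for time-averaged levels.
        self.front_avg_db += self.alpha * (front_level_db - self.front_avg_db)
        self.rear_avg_db += self.alpha * (rear_level_db - self.rear_avg_db)
        measured_ratio_db = self.front_avg_db - self.rear_avg_db
        # Positive correction: boost the front beam / attenuate the rear.
        return self.target_ratio_db - measured_ratio_db
```

In the zoom-linked implementation mentioned above, `target_ratio_db` would itself be raised as the video is zoomed in rather than held constant.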
Gain control examples will now be described with reference to Figures 7A-7C. The directional patterns shown in Figures 7A-7C are a flat horizontal slice through the directional response as would be observed by a viewer located above the electronic device 100 of Figure 1 looking down, where the z axis in Figure 3 corresponds to the 90°-270° line, and the y axis in Figure 3 corresponds to the 0°-180° line. Figure 7A is an exemplary polar graph of a front and rear oriented beam-formed audio signal 652-1 generated by the audio processing system 600 in accordance with an implementation of some of the disclosed embodiments. As illustrated in Figure 7A, the front and rear oriented beam-formed audio signal 652-1 has a first-order directional pattern with a larger front oriented lobe 652-1A that is oriented or points toward the subject in the -z direction in front of the device, and with a smaller rear oriented lobe 652-1B that points or is oriented toward the operator in the +z direction behind the device and has a maximum at 270 degrees. This first-order directional pattern has a maximum at 90 degrees and has a relatively strong directional sensitivity to sound originating from the subject's direction, and a reduced directional sensitivity to sound originating from the operator's direction. Stated differently, the front and rear oriented beam-formed audio signal 652-1 emphasizes sound waves emanating from in front of the device. Figure 7B is an exemplary polar graph of a front and rear oriented beam-formed audio signal 652-2 generated by the audio processing system 600 in accordance with another implementation of some of the disclosed embodiments. Compared to Figure 7A, the front oriented major lobe 652-2A that is oriented or points toward the subject has increased in width, and the gain of the rear side oriented minor lobe 652-2B that points or is oriented toward the operator has decreased.
This indicates that the directional response of the operator's virtual microphone illustrated in Figure 7B has been attenuated relative to the directional response of the subject's virtual microphone to prevent the operator's audio level from dominating the subject's audio level. These adjustments could be used in a situation where the subject is located at a relatively farther distance from the electronic device 100 than in Figure 7A, as reflected in the balance signal 664. Figure 7C is an exemplary polar graph of a front and rear oriented beam-formed audio signal 652-3 generated by the audio processing system 600 in accordance with yet another implementation of some of the disclosed embodiments. Compared to Figure 7B, the front oriented major lobe 652-3A that is oriented or points toward the subject has increased even more in width, and the gain of the rear oriented minor lobe 652-3B that points toward the operator has decreased even more. This indicates that the directional response of the operator's virtual microphone illustrated in Figure 7C has been further attenuated relative to the directional response of the subject's virtual microphone to prevent the operator's audio level from dominating the subject's audio level. These adjustments could be used in a situation where the subject is located at a relatively farther distance from the electronic device 100 than in Figure 7B, as reflected in the balance signal 664. The examples illustrated in Figures 7A-7C show the beamform responses of the front and rear oriented beam-formed audio signal 652 as the subject gets farther away from the device 100, as reflected in the balance signal 664. As the subject gets farther away, the gain of the front oriented larger lobe 652-1A increases relative to the rear oriented smaller lobe 652-1B, and the width of the front oriented larger lobe 652-1A increases as the relative gain difference between the front oriented larger lobe 652-1A and the rear oriented smaller lobe 652-1B increases. In addition, Figures 7A-7C also generally illustrate that the relative gain of the front oriented larger lobe 652-1A with respect to the rear oriented smaller lobe 652-1B can be controlled or adjusted during processing based on the balance signal 664. In this way, the ratio of the gains of the front oriented larger lobe 652-1A to the rear oriented smaller lobe 652-1B can be controlled so that one does not dominate the other. As above, in one implementation, the relative gain of the front oriented larger lobe 652-1A can be increased with respect to the rear oriented smaller lobe 652-1B so that the audio level corresponding to the operator is lower than or equal to the audio level corresponding to the subject (for example, a ratio of subject audio level to operator audio level that is greater than or equal to one). In this way, the operator's audio level will not dominate that of the subject. Although the beam-formed audio signal 652 shown in Figures 7A through 7C is beamformed with a first-order directional beam pattern, those skilled in the art will recognize that the beam-formed audio signal 652 is not necessarily limited to a first-order directional pattern, and that the figures illustrate only one exemplary implementation. Also, the first-order directional beam pattern shown here has nulls to the sides and a directivity index between that of a bidirectional pattern and a cardioid; however, the first-order directional beam shape could have the same front-to-rear gain ratio with a directivity index between that of a cardioid and an omnidirectional beam shape pattern, resulting in no nulls to the sides.
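The family of first-order patterns discussed above can be written as g(θ) = α + (1 − α)·cos θ, where α sweeps the shape from bidirectional (α = 0) through cardioid (α = 0.5) toward omnidirectional (α = 1). The sketch below, with illustrative parameter names, shows how a front-to-rear lobe ratio like that of lobes 652-1A and 652-1B follows from α; it is a mathematical illustration, not the patent's processing.

```python
import math

def first_order_gain(theta_deg: float, alpha: float) -> float:
    """First-order directional gain alpha + (1 - alpha) * cos(theta),
    with theta = 0 toward the subject (front) and theta = 180 toward
    the operator (rear). alpha = 0.5 is a cardioid with a rear null;
    alpha > 0.5 produces a rear-facing minor lobe."""
    return alpha + (1.0 - alpha) * math.cos(math.radians(theta_deg))

def front_to_rear_ratio_db(alpha: float) -> float:
    """Ratio (dB) of the front lobe maximum to the rear lobe level;
    meaningful only when a rear lobe exists (alpha > 0.5)."""
    front = abs(first_order_gain(0.0, alpha))
    rear = abs(first_order_gain(180.0, alpha))
    return 20.0 * math.log10(front / rear)
```

Under this model, a balance signal that selects α effectively selects the subject-to-operator level ratio: lowering α toward 0.5 deepens the rear attenuation, as in the progression from Figure 7A to Figure 7C.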
Furthermore, although the beam-formed audio signal 652 is illustrated as having a mathematically ideal directional pattern, those skilled in the art will recognize that these are examples only and that, in practical implementations, these idealized beamform patterns will not necessarily be obtained. Figure 8 is a schematic diagram of a microphone and video camera configuration 800 of an electronic device in accordance with some of the other disclosed embodiments. As with Figure 3, the configuration 800 is illustrated with reference to a Cartesian coordinate system. In Figure 8, the relative locations of a rear side microphone 820, a front side microphone 830, a third microphone 870, and a front side video camera 810 are shown. The microphones 820, 830 are located or oriented along a common z axis, 180 degrees apart along the 90 degree/270 degree line. The first physical microphone element 820 is on the operator or rear side of the portable electronic device 100, and the second physical microphone element 830 is on the front or subject side of the electronic device 100. The third microphone 870 is located along the y axis, which is oriented along the line at approximately 180 degrees, and the x axis is oriented perpendicular to the y axis and z axis in an upward direction. The video camera 810 is also located along the y axis and points in the -z direction toward the subject in front of the device, as does the microphone 830. The subject (not shown) would be located in front of the front side microphone 830, and the operator (not shown) would be located behind the rear side microphone 820. In this way, the microphones are oriented such that they can capture audio or sound signals from the operator making the video as well as from a subject being recorded by the video camera 810.
As in Figure 3, the physical microphones 820, 830, 870 described herein can be any known type of physical microphone element, including omnidirectional microphones, directional microphones, pressure microphones, pressure gradient microphones, etc. The physical microphones 820, 830, 870 can be part of a microphone array that is processed using beamforming techniques, such as delay-and-sum (or delay-and-difference), to establish directional patterns based on the outputs generated by the physical microphones 820, 830, 870. As will now be described with reference to Figures 9-10D, the rear side gain of a virtual microphone element corresponding to the operator can be controlled and attenuated relative to the front left and front right side gains of the virtual microphone elements corresponding to the subject, so that the operator audio level does not dominate the subject audio level. Furthermore, since the three microphones allow directional patterns to be created at any angle in the yz plane, the front left and front right virtual microphone elements, along with the rear side virtual microphone element, can enable stereo or surround recordings of the subject to be created while simultaneously allowing the operator's narration to be recorded. Figure 9 is a block diagram of an audio processing system 900 of an electronic device 100 in accordance with some of the disclosed embodiments. The audio processing system 900 includes a microphone array that includes a first microphone 920 that generates a first signal 921 in response to incoming sound, a second microphone 930 that generates a second signal 931 in response to incoming sound, and a third microphone 970 that generates a third signal 971 in response to incoming sound. These output signals are generally electrical signals (e.g., voltages) that correspond to the sound pressure captured at the microphones.
A first filter module 922 is designed to filter the first signal 921 to generate a first phase-delayed audio signal 925 (e.g., a phase-delayed version of the first signal 921), a second filter module 932 is designed to filter the second signal 931 to generate a second phase-delayed audio signal 935, and a third filter module 972 is designed to filter the third signal 971 to generate a third phase-delayed audio signal 975. As noted above with reference to Figure 4, although the first filter module 922, the second filter module 932, and the third filter module 972 are illustrated as being separate from the processor 950, in other implementations the first filter module 922, the second filter module 932, and the third filter module 972 may be implemented in the processor 950, as indicated by the dashed-line rectangle 940. The automated balance controller 980 generates a balance signal 964 based on an imaging signal 985 using any of the techniques described above with reference to Figure 4. As such, depending on the implementation, the imaging signal 985 can be provided from any of several different sources. In one implementation, the video camera 810 is coupled to the automated balance controller 980. The processor 950 receives a plurality of input signals including the first signal 921, the first phase-delayed audio signal 925, the second signal 931, the second phase-delayed audio signal 935, the third signal 971, and the third phase-delayed audio signal 975. The processor 950 processes these input signals 921, 925, 931, 935, 971, 975 based on the balance signal 964 (and possibly based on other signals such as the balance select signal 965 or the AGC signal 962) to generate a front left side oriented beam-formed audio signal 952, a front right side oriented beam-formed audio signal 954, and a rear side oriented beam-formed audio signal 956, which correspond to a left "subject" channel, a right "subject" channel, and a rear "operator" channel, respectively.
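The delay-and-difference technique mentioned above, of which the phase-delay filter modules are the building blocks, can be sketched for a two-element endfire pair as follows. The integer-sample delay and the omission of the usual low-frequency equalization are simplifying assumptions for illustration.

```python
def delay_and_difference(front_sig, rear_sig, delay_samples):
    """Delay the rear microphone signal by the acoustic travel time
    between the two capsules (expressed in whole samples here) and
    subtract it from the front signal. Sound arriving from directly
    behind then cancels, leaving a first-order front-facing pattern."""
    n = len(rear_sig)
    delayed_rear = [0.0] * delay_samples + list(rear_sig[:n - delay_samples])
    return [f - d for f, d in zip(front_sig, delayed_rear)]
```

For a wave arriving from behind, the rear microphone leads the front by exactly the inter-capsule travel time, so the delayed rear signal lines up with the front signal and the output nulls; sound from the front is passed instead, with the characteristic first-order frequency response.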
As will be described below, the balance signal 964 can be used to control an audio level difference among a front left side gain of the front left side oriented beam-formed audio signal 952, a front right side gain of the front right side oriented beam-formed audio signal 954, and a rear side gain of the rear side oriented beam-formed audio signal 956 during beamform processing. This allows the audio levels of the subject's virtual microphones to be controlled in relation to the operator's virtual microphone. The beamform processing performed by the processor 950 may be realized using any known beamform processing technique to generate directional patterns based on the microphone input signals. Figures 10A-10B provide examples where the larger lobes are no longer oriented at 90 degrees, but at angles approximately symmetrical about 90 degrees. Of course, the larger lobes could be directed to other angles based on standard beamforming techniques. In this example, the null of each virtual microphone is centered at 270 degrees to suppress the signal coming from the operator at the back of the device. In one implementation, the balance signal 964 may be used to determine a ratio of a first gain of the rear side oriented beam-formed audio signal 956 to a second gain of the major lobe 952-A (Figure 10) of the front left side oriented beam-formed audio signal 952 and a third gain of the major lobe 954-A (Figure 10) of the front right side oriented beam-formed audio signal 954. In other words, the balance signal 964 determines the relative weight of the first gain with respect to the second gain and third gain, in such a way that sound waves emanating from the front left side and front right side are emphasized with respect to sound waves emanating from the rear side. The relative gain of the rear side oriented beam-formed audio signal 956 with respect to the front left side oriented beam-formed audio signal 952 and the front right side oriented beam-formed audio signal 954 can be controlled during processing based on the balance signal 964.
To do this, in one implementation, the first gain of the rear side oriented beam-formed audio signal 956, and/or the second gain of the front left side oriented beam-formed audio signal 952, and/or the third gain of the front right side oriented beam-formed audio signal 954 may be varied. For example, in one implementation, the rear gain and front gains are adjusted so that they are substantially balanced, so that the operator audio does not dominate the subject audio. In one implementation, the processor 950 may include a lookup table (LUT) that receives the input signals 921, 925, 931, 935, 971, 975 and the balance signal 964, and generates the front left side oriented beam-formed audio signal 952, the front right side oriented beam-formed audio signal 954, and the rear side oriented beam-formed audio signal 956. In another implementation, the processor 950 is designed to evaluate an equation based on the input signals 921, 925, 931, 935, 971, 975 and the balance signal 964 to generate the front left side oriented beam-formed audio signal 952, the front right side oriented beam-formed audio signal 954, and the rear side oriented beam-formed audio signal 956. The equation includes coefficients for the first signal 921, the first phase-delayed audio signal 925, the second signal 931, the second phase-delayed audio signal 935, the third signal 971, and the third phase-delayed audio signal 975, and the values of these coefficients can be adjusted or controlled based on the balance signal 964 to generate a gain-adjusted front left side oriented beam-formed audio signal 952, a gain-adjusted front right side oriented beam-formed audio signal 954, and/or a gain-adjusted rear side oriented beam-formed audio signal 956. Gain control examples will now be described with reference to Figures 10A-10D.
Similar to the other example graphs above, the directional patterns shown in Figures 10A-10D are a flat horizontal representation of the directional response as it would be observed by a viewer located above the electronic device 100 of Figure 1 looking down, where the z axis in Figure 8 corresponds to the 90°-270° line, and the y axis in Figure 8 corresponds to the 0°-180° line. Figure 10A is an exemplary polar graph of a front left side oriented beam-formed audio signal 952 generated by the audio processing system 900 in accordance with an implementation of some of the disclosed embodiments. As illustrated in Figure 10A, the front left side oriented beam-formed audio signal 952 has a first-order directional pattern that is oriented or points toward the subject at an angle in front of the device between the +y direction and the -z direction. In this specific example, the front left side oriented beam-formed audio signal 952 has a first major lobe 952-A and a first minor lobe 952-B. The first major lobe 952-A is oriented toward the left of the subject being recorded and has a front left side gain. This first-order directional pattern has a maximum at approximately 150 degrees and has a relatively strong directional sensitivity to sound originating from a direction to the left of the subject toward the device 100. The front left side oriented beam-formed audio signal 952 also has a null at 270 degrees that points toward the operator (in the +z direction) who is recording the subject, which indicates that there is reduced directional sensitivity to sound originating from the operator's direction. The front left side oriented beam-formed audio signal 952 also has a right null at 90 degrees that points or is oriented toward the right side of the subject being recorded, which indicates that there is reduced directional sensitivity to sound originating from the direction of the subject's right side.
Stated differently, the front left side oriented beam-formed audio signal 952 emphasizes sound waves emanating from the front left side and includes a null oriented toward the rear of the housing and the operator. Figure 10B is an exemplary polar graph of a front right side oriented beam-formed audio signal 954 generated by the audio processing system 900 in accordance with an implementation of some of the disclosed embodiments. As illustrated in Figure 10B, the front right side oriented beam-formed audio signal 954 has a first-order directional pattern that is oriented or points toward the subject at an angle in front of the device between the -y direction and the -z direction. In this specific example, the front right side oriented beam-formed audio signal 954 has a second major lobe 954-A and a second minor lobe 954-B. The second major lobe 954-A has a front right side gain. In particular, this first-order directional pattern has a maximum at approximately 30 degrees and has a relatively strong directional sensitivity to sound originating from a direction to the right of the subject toward the device 100. The front right side oriented beam-formed audio signal 954 also has a null at 270 degrees that points toward the operator (in the +z direction) who is recording the subject, which indicates that there is reduced directional sensitivity to sound originating from the operator's direction. The front right side oriented beam-formed audio signal 954 also has a left null at 90 degrees that is oriented toward the left side of the subject being recorded, which indicates that there is reduced directional sensitivity to sound originating from the direction of the subject's left side. Stated differently, the front right side oriented beam-formed audio signal 954 emphasizes sound waves emanating from the front right side and includes a null oriented toward the rear of the housing and the operator.
Those skilled in the art will recognize that these are examples only and that the angle of the maximum of the larger lobes may change based on the angular width of the video frame; however, the nulls that remain at 270 degrees help cancel the sound emanating from the operator behind the device. Figure 10C is an exemplary polar graph of a rear side oriented beam-formed audio signal 956 generated by the audio processing system 900 in accordance with an implementation of some of the disclosed embodiments. As illustrated in Figure 10C, the rear side oriented beam-formed audio signal 956 has a first-order cardioid directional pattern that points or is oriented behind the device 100 toward the operator in the +z direction, and has a maximum at 270 degrees. The rear side oriented beam-formed audio signal 956 has a rear side gain and a relatively strong directional sensitivity to sound originating from the operator's direction. The rear side oriented beam-formed audio signal 956 also has a null (at 90 degrees) that points toward the subject (in the -z direction), which indicates that there is little or no directional sensitivity to sound originating from the subject's direction. Stated differently, the rear side oriented beam-formed audio signal 956 emphasizes sound waves emanating from the rear of the housing and has a null oriented toward the front of the housing. Although not illustrated in Figure 9, in some embodiments, the beam-formed audio signals 952, 954, 956 may be combined into a single output signal that can be transmitted and/or recorded. Alternatively, the output signal can be a two-channel stereo signal or a multi-channel surround signal. Figure 10D is an exemplary polar graph of the front left side oriented beam-formed audio signal 952, the front right side oriented beam-formed audio signal 954, and the rear side oriented beam-formed audio signal 956-1 when combined to output a multi-channel surround signal.
Although the responses of the front left side oriented beam-formed audio signal 952, the front right side oriented beam-formed audio signal 954, and the rear side oriented beam-formed audio signal 956-1 are shown together in Figure 10D, this is not necessarily intended to indicate that the beam-formed audio signals 952, 954, 956-1 must be combined in all implementations. Compared to Figure 10C, the gain of the rear side oriented beam-formed audio signal 956-1 has decreased. As illustrated in Figure 10D, the directional response of the operator's virtual microphone illustrated in Figure 10C can be attenuated relative to the directional response of the subject's virtual microphones to prevent the operator audio level from dominating the subject audio level. The relative gain of the rear side oriented beam-formed audio signal 956-1 with respect to the front side oriented beam-formed audio signals 952, 954 can be controlled or adjusted during processing based on the balance signal 964 to account for the distance of the subject and/or operator from the electronic device 100. In one implementation, the audio level difference among the front right side gain, the front left side gain, and the rear side gain is controlled during processing based on the balance signal 964. By varying the gains of the virtual microphones based on the balance signal 964, the gain ratio of the beam-formed audio signals 952, 954, 956 can be controlled so that one does not dominate the others. In each of the front left side oriented beam-formed audio signal 952 and the front right side oriented beam-formed audio signal 954, a null can be aimed at the rear (or operator) side to cancel operator audio. For such a null implementation, the rear side oriented beam-formed audio signal 956, which is oriented toward the operator, can be mixed with each output channel (corresponding to the front left side oriented beam-formed audio signal 952 and the front right side oriented beam-formed audio signal 954) to capture operator narration.
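The mixing step described above, in which the rear-oriented operator beam is added to each front channel so that the narration appears as a center image in stereo playback, could look like the following sketch. Splitting the operator level equally between the two channels (a gain of 0.5 per channel) and the function name are illustrative assumptions.

```python
def mix_operator_into_stereo(left_front, right_front, rear,
                             operator_gain=0.5):
    """Add the rear side oriented (operator) beam into both front
    channels so that, in stereo playback, the operator's narration is
    perceived as a center image between the subject channels."""
    left_out = [l + operator_gain * r for l, r in zip(left_front, rear)]
    right_out = [rt + operator_gain * r for rt, r in zip(right_front, rear)]
    return left_out, right_out
```

Scaling `operator_gain` by the balance signal would realize the narration-level control described in the text without disturbing the subject channels' nulls toward the operator.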
Although the beam-formed audio signals 952, 954 shown in Figures 10A and 10B have a specific first-order directional pattern, and although the beam-formed audio signal 956 is beamformed according to a rear-oriented cardioid directional beamform pattern, those skilled in the art will recognize that the beam-formed audio signals 952, 954, 956 are not necessarily limited to having the specific types of first-order directional patterns illustrated in Figures 10A-10D, and that these are shown as an exemplary implementation. The directional patterns can generically be any first-order directional beamform pattern, such as a cardioid, dipole, hypercardioid, supercardioid, etc. Alternatively, higher-order directional beamform patterns can be used. Furthermore, although the beam-formed audio signals 952, 954, 956 are illustrated as having mathematically ideal first-order directional patterns, those skilled in the art will recognize that these are examples only and that, in practical implementations, these idealized beamform patterns will not necessarily be obtained. Figure 11 is a block diagram of an audio processing system 1100 of an electronic device 100 in accordance with some of the disclosed embodiments. The audio processing system 1100 of Figure 11 is nearly identical to that of Figure 9, except that instead of generating three beam-formed audio signals, only two beam-formed audio signals are generated. The features common to Figure 9 will not be described again for the sake of brevity. More specifically, the processor 1150 processes the input signals 1121, 1125, 1131, 1135, 1171, 1175 based on the balance signal 1164 (and possibly based on other signals such as the balance select signal 1165 or the AGC signal 1162) to generate a front left side oriented beam-formed audio signal 1152 and a front right side oriented beam-formed audio signal 1154 without generating a separate rear side oriented beam-formed audio signal (as in Figure 9).
This eliminates the need to sum/mix the front-left oriented beam-formed audio signal 1152 with a separate rear-oriented beam-formed audio signal, and the need to sum/mix the front-right oriented beam-formed audio signal 1154 with a separate rear-oriented beam-formed audio signal. The directional patterns of the front left and right side virtual microphone elements that correspond to signals 1152, 1154 can be created at any angle in the yz plane to allow stereo recordings of the subject to be created while still allowing operator narration to be recorded. For example, instead of creating and mixing a separate operator beamform with each subject channel, the front-left oriented beam-formed audio signal 1152 and the front-right oriented beam-formed audio signal 1154 each capture half of the operator's desired audio level, and when heard in stereo playback would result in a proper audio level representation of the operator with a center image. In this embodiment, the left front side oriented beam-formed audio signal 1152 (Figure 12A) has a first major lobe 1152-A having a front left side gain and a first minor lobe 1152-B having a rear side gain at 270 degrees, and the front right oriented beam-formed audio signal 1154 (Figure 12B) has a second major lobe 1154-A having a front right side gain and a second minor lobe 1154-B having a rear side gain at 270 degrees. The reason the gain comparison is now done on the major lobes and at 270 degrees is that the 270 degree point refers to the operator's position. As we are primarily interested in the balance between the front subject signals and the rear operator signal, we look at the major lobes and the operator location (which is assumed to be at 270 degrees). In this case, unlike that of Figure 9, a null will not exist at 270 degrees. 
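The symmetry argument above can be checked numerically: two mirror-image beams steered symmetrically about the front respond identically at the operator position (270 degrees), so their stereo sum restores the full operator level with a center image. The steering angles and pattern parameter below are illustrative choices for the sketch, not the exact patterns of Figures 12A-12B:

```python
import math

def steered_first_order(theta_deg, alpha, steer_deg):
    """First-order pattern α + (1 − α)·cos(θ − steer), aimed at steer_deg."""
    d = math.radians(theta_deg - steer_deg)
    return alpha + (1.0 - alpha) * math.cos(d)

# Left/right beams steered symmetrically about the front (90 degrees).
# 137.5/42.5 degrees and α = 0.25 are illustrative assumptions.
OPERATOR_DEG = 270.0
left_at_op = steered_first_order(OPERATOR_DEG, 0.25, 137.5)
right_at_op = steered_first_order(OPERATOR_DEG, 0.25, 42.5)
# By symmetry, left_at_op == right_at_op: each channel carries the
# same share of the operator signal, so the stereo sum doubles it
# and images the operator at center.
```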
As will be described below, the balance signal 1164 can be used during beamform processing to control an audio level difference between the front left side gain of the first major lobe and the rear side gain of the first minor lobe at 270 degrees, and to control an audio level difference between the front right side gain of the second major lobe and the rear side gain of the second minor lobe at 270 degrees. In this way, the front side gain and rear side gain of each virtual microphone element can be controlled and attenuated relative to each other. A portion of the left front oriented beam-formed audio signal 1152 attributable to the first minor lobe 1152-B and a portion of the right front oriented beam-formed audio signal 1154 attributable to the second minor lobe 1154-B will be perceptually summed by the user through normal hearing. This allows control of the audio levels of the subject's virtual microphones in relation to the operator's virtual microphone. The beamform processing performed by processor 1150 may be performed using any known beamform processing technique to generate directional patterns based on microphone input signals. Any of the techniques described above for controlling audio level differences can be adapted for use in this embodiment; in one implementation, the balance signal 1164 can be used to control a ratio or relative weight of the front side gain and the rear side gain at 270 degrees for a given one of signals 1152, 1154, and for the sake of brevity those techniques will not be described again. Gain control examples will now be described with reference to Figures 12A-12C. Similar to the other example graphs above, the directional patterns shown in Figures 12A-12C are plane representations that would be observed by a viewer located above the electronic device 100 of Figure 1 who is looking down, where the geometric z axis in Figure 8 corresponds to the 90°-270° line, and the y axis in Figure 8 corresponds to the 0°-180° line. Fig. 
12A is an exemplary polar graph of a front left oriented beam-formed audio signal 1152 generated by audio processing system 1100 in accordance with an implementation of some of the disclosed embodiments. As illustrated in Figure 12A, the left front side oriented beam-formed audio signal 1152 has a first-order directional pattern that is oriented or points toward the subject at an angle in front of the device between the y direction and the -z direction. In this specific example, the left front oriented beam-formed audio signal 1152 has a major lobe 1152-A and a minor lobe 1152-B. The major lobe 1152-A is oriented to the left of the subject being recorded and has a front left side gain, while the minor lobe 1152-B has a rear side gain. This first-order directional pattern has a maximum at approximately 137.5 degrees and has a relatively strong directional sensitivity to sound originating from a direction to the left of the subject toward the apparatus 100. The left front oriented beam-formed audio signal 1152 also has a null at 30 degrees that points or is oriented toward the right side of the subject being recorded, which indicates that there is reduced directional sensitivity to sound originating from the direction toward the subject's right side. The minor lobe 1152-B has exactly half the desired operator sensitivity at 270 degrees to pick up an appropriate amount of signal from the operator. Fig. 12B is an exemplary polar graph of a front right oriented beam-formed audio signal 1154 generated by audio processing system 1100 in accordance with an implementation of some of the disclosed embodiments. As illustrated in Figure 12B, the front right oriented beam-formed audio signal 1154 has a first-order directional pattern that is oriented or points toward the subject at an angle in front of the device between the -y direction and the -z direction. In this specific example, the front right oriented beam-formed audio signal 1154 has a major lobe 1154-A and a minor lobe 1154-B. 
The major lobe 1154-A has a front right side gain and the minor lobe 1154-B has a rear side gain. In particular, this first-order directional pattern has a maximum at approximately 45 degrees and has a relatively strong directional sensitivity to sound originating from a direction to the subject's right toward the apparatus 100. The front right side oriented beam-formed audio signal 1154 has a null at 150 degrees that is oriented toward the left side of the subject being recorded, which indicates that there is reduced directional sensitivity to sound originating from the direction toward the subject's left side. The minor lobe 1154-B has exactly half the desired operator sensitivity at 270 degrees to pick up an appropriate amount of signal from the operator. Although not illustrated in Fig. 11, in some embodiments, the beam-formed audio signals 1152, 1154 may be combined into a single audio stream or output signal that can be transmitted and/or recorded as a stereo signal. Fig. 12C is a polar graph of exemplary angular or "directional" responses of the front-left oriented beam-formed audio signal 1152 and the front-right oriented beam-formed audio signal 1154 generated by the audio processing system 1100 when combined as a stereo signal, in accordance with an implementation of some of the disclosed embodiments. Although the responses of the front-left oriented beam-formed audio signal 1152 and the front-right oriented beam-formed audio signal 1154 are shown together in Fig. 12C, it is noted that this is not necessarily intended to indicate that the beam-formed signals 1152, 1154 must be combined in all implementations. By varying the lobe gains of the virtual microphones based on the balance signal 1164, the ratio of front side gains and rear side gains of the beam-formed audio signals 1152, 1154 can be controlled so that one does not dominate the other. 
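As noted above, any known beamform processing technique can generate such directional patterns from the microphone input signals. A minimal delay-and-subtract (first-order differential) sketch over two omnidirectional capsules, using an illustrative whole-sample delay where a real implementation would use fractional delays and low-frequency equalization:

```python
def delay_and_subtract(front, rear, delay_samples):
    """First-order differential beamformer over two omnidirectional
    capsule signals: delay the rear capsule by the acoustic travel
    time between the capsules, then subtract.  Sound arriving from
    the rear cancels, producing a rear-facing null (cardioid-like)."""
    out = []
    for n in range(len(front)):
        delayed = rear[n - delay_samples] if n >= delay_samples else 0.0
        out.append(front[n] - delayed)
    return out

# A wave arriving from the rear reaches the rear capsule first and the
# front capsule `delay_samples` later, so the front signal is a delayed
# copy of the rear signal.  (Values below are illustrative.)
delay_samples = 3
rear_sig = [0.0, 1.0, 0.5, -0.5, -1.0, 0.0, 0.3, 0.1]
front_sig = [0.0] * delay_samples + rear_sig[:-delay_samples]

beam = delay_and_subtract(front_sig, rear_sig, delay_samples)
# Rear-arriving sound cancels to zero; front-arriving sound passes.
```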
As above, although the beam-formed audio signals 1152, 1154 shown in Figures 12A and 12B have a specific first-order directional pattern, those skilled in the art will recognize that the specific types of directional patterns illustrated in Figures 12A-12C are shown to illustrate an exemplary implementation and are not intended to be limiting. The directional patterns can generally have any first-order (or higher-order) directional beamform patterns, and in some practical implementations these mathematically idealized beamform patterns may not necessarily be obtained. Although not explicitly described above, any of the embodiments or implementations of balance signals, balance selection signals, and AGC signals that have been described above with reference to Figures 3-5E can all be applied equally in the embodiments illustrated and described with reference to Figures 6-7C, Figures 8-10D and Figures 11-12C. Fig. 13 is a block diagram of an electronic device 1300 that may be used in an implementation of the disclosed embodiments. In the specific example illustrated in Figure 13, the electronic device is implemented as a wireless computing device, such as a mobile phone, which is capable of communicating over the air via a radio frequency (RF) channel. Wireless computing device 1300 comprises a processor 1301, a memory 1303 (including program memory for storing operating instructions that are executed by the processor 1301, a buffer memory, and/or a removable storage unit), a baseband processor (BBP) 1305, an RF front end module 1307, an antenna 1308, a video camera 1310, a video controller 1312, an audio processor 1314, front and/or rear proximity sensors 1315, audio coders/decoders (CODECs) 1316, a display 1317, a user interface 1318 that includes input devices (keyboards, touch screens, etc.), a loudspeaker 1319 (that is, a loudspeaker used for listening by a user of device 1300) and two or more microphones 1320, 1330, 1370. 
The various blocks can be coupled together as illustrated in Figure 13 through a bus or other connection. The wireless computing device 1300 may also contain a power source such as a battery (not shown) or wired transformer. Wireless computing device 1300 may be an integrated unit containing at least all of the elements shown in Figure 13, as well as any other elements necessary for wireless computing device 1300 to perform its specific functions. As described above, the microphones 1320, 1330, 1370 may operate in combination with the audio processor 1314 to allow acquisition of audio information originating on the front and rear sides of the wireless computing device 1300. The automated balance controller (not illustrated in Figure 13) described above may be implemented in the audio processor 1314 or external to the audio processor 1314. The automated balance controller may utilize an imaging signal provided from one or more of the processor 1301, the video controller 1312, the proximity sensors 1315, and the user interface 1318 to generate a balance signal. Audio processor 1314 processes the output signals from microphones 1320, 1330, 1370 to generate one or more beam-formed audio signals, and controls an audio level difference between a front side gain and a rear side gain of the one or more beam-formed audio signals during processing based on the balance signal. The other blocks in Figure 13 are conventional features in this exemplary operating environment, and therefore for the sake of brevity will not be described in detail here. It should be recognized that the exemplary embodiments described with reference to Figures 1-13 are not limiting and that other variations exist. It is also to be understood that various changes may be made without departing from the scope of the invention as set out in the appended claims and legal equivalents thereof. 
The embodiments described with reference to Figures 1-13 can be implemented in a wide variety of different implementations and different types of portable electronic devices. While it has been assumed that the rear side gain should be reduced relative to the front side gain (or that the front side gain should be increased relative to the rear side gain), different implementations could increase the rear side gain relative to the front side gain (or reduce the front side gain relative to the rear side gain). Those of skill in the art will recognize that the various illustrative logic blocks, modules, circuits, and steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. Some of the embodiments and implementations are described above in terms of functional components and/or logical blocks (or modules) and various processing steps. However, it must be recognized that such block components (or modules) may be realized by any number of hardware, software and/or firmware components configured to perform the specified functions. As used herein, the term "module" refers to a device, circuit, electrical component, and/or software-based component for performing a task. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the specific application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each specific application, but such implementation decisions should not be interpreted as departing from the scope of the present invention. For example, one embodiment of a system or a component may employ various integrated circuit components, e.g. 
memory elements, digital signal processing elements, logic elements, lookup tables, or the like, which may perform a variety of functions under the control of one or more microprocessors or other control devices. Furthermore, those skilled in the art will recognize that the embodiments described herein are merely exemplary implementations. The various illustrative logic blocks, modules and circuits described with respect to the embodiments disclosed herein may be implemented or executed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but alternatively, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, for example, a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. The steps of a method or algorithm described with respect to the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integrated into the processor. The processor and storage medium may reside in an ASIC. 
The ASIC may reside in a user terminal. In the alternative, the processor and storage medium may reside as discrete components in a user terminal. Furthermore, the connecting lines or arrows shown in the various figures contained herein are intended to represent exemplary functional relationships and/or couplings between the various elements. Many alternative or additional functional relationships or couplings may be present in a practical embodiment. In this document, relational terms such as first and second, and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Numerical ordinals such as "first", "second", "third", etc. simply denote different singles of a plurality and do not imply any order or sequence unless specifically defined by the claim language. The sequence of text in any of the claims does not imply that process steps must be performed in a temporal or logical order according to such sequence unless it is specifically defined by the claim language. Process steps may be interchanged in any order without departing from the scope of the invention as long as such interchange does not contradict the claim language and is not logically nonsensical. Also, depending on the context, words such as "connect" or "coupled to" used in describing a relationship between different elements do not imply that a direct physical connection must be made between those elements. For example, two elements may be connected to each other physically, electronically, logically, or in any other manner, through one or more additional elements. While at least one exemplary embodiment has been set forth in the above detailed description, it should be recognized that a vast number of variations exist. 
It should also be recognized that the exemplary embodiment or exemplary embodiments are exemplary only, and are not intended to limit the scope, applicability, or configuration of the invention in any way. Rather, the above detailed description will provide those skilled in the art with a convenient road map for implementing the exemplary embodiment or embodiments. It is to be understood that various changes may be made to the function and arrangement of elements without departing from the scope of the invention as set out in the appended claims and legal equivalents thereof.
Claims (20) [0001] 1. Electronic apparatus (100) having a rear side and a front side, characterized in that it comprises: a first microphone (820, 920) which generates a first signal (921); a second microphone (830, 930) which generates a second signal (931); a third microphone (870, 970) which generates a third signal (971); an automated balance controller (980) which generates a balance signal (964) based on an imaging signal (985); and a processor (950), coupled to the first microphone, the second microphone, the third microphone, and the automated balance controller, which processes the first signal, the second signal, and the third signal to generate: a left front side beam-formed audio signal (952) having a first major lobe having a left front side gain, a right front side beam-formed audio signal (954) having a second major lobe having a right front side gain, and a third beam-formed audio signal (956) having a third, rear side gain, wherein an audio level difference between the third rear side gain and both the front right side gain and the front left side gain is controlled based on the balance signal (964). [0002] 2. Electronic device, according to claim 1, characterized in that it further comprises: a video camera (810) positioned on the front side and coupled to the automated balance controller. [0003] 3. Electronic device, according to claim 2, characterized in that the automated balance controller (980) comprises: a video controller coupled to the video camera (810). [0004] 4. Electronic device, according to claim 3, characterized in that the imaging signal (985) is based on an angular field of view of a video frame of the video camera (810). [0005] 5. Electronic device, according to claim 3, characterized in that the imaging signal (985) is based on a focal length of the video camera (810). [0006] 6. 
Electronic device, according to claim 3, characterized in that the imaging signal (985) is a zoom control signal for the video camera (810) that is controlled by a user interface. [0007] 7. Electronic device, according to claim 6, characterized in that the zoom control signal for the video camera (810) is a digital zoom control signal. [0008] 8. Electronic device, according to claim 6, characterized in that the zoom control signal for the video camera (810) is an optical zoom control signal. [0009] 9. Electronic device, according to claim 1, characterized in that it further comprises: a front side proximity sensor that generates a front side proximity sensor signal that corresponds to a first distance between a video subject and the electronic device (100), wherein the imaging signal (985) is based on the front side proximity sensor signal. [0010] 10. Electronic device, according to claim 1, characterized in that it further comprises: a rear side proximity sensor that generates a rear side proximity sensor signal that corresponds to a second distance between a camera operator and the electronic device (100), wherein the imaging signal (985) is based on the rear side proximity sensor signal. [0011] 11. Electronic device, according to claim 1, characterized in that it further comprises: a front side proximity sensor that generates a front side proximity sensor signal that corresponds to a first distance between a video subject and the electronic device (100); and a rear side proximity sensor that generates a rear side proximity sensor signal that corresponds to a second distance between a camera operator and the electronic device (100), wherein the imaging signal (985) is based on the front side proximity sensor signal and the rear side proximity sensor signal. [0012] 12. 
Electronic device, according to claim 1, characterized in that the automated balance controller (980) generates a balance selection signal, wherein at least one of the front side gain and the rear side gain of the at least one beam-formed audio signal is adjusted to a predetermined value based on the balance selection signal. [0013] 13. Electronic device, according to claim 1, characterized in that the first microphone (820, 920) or the second microphone (830, 930) or the third microphone (870, 970) is an omnidirectional microphone. [0014] 14. Electronic device, according to claim 1, characterized in that the first microphone (820, 920) or the second microphone (830, 930) or the third microphone (870, 970) is a directional microphone. [0015] 15. Electronic apparatus, according to claim 1, characterized in that: the front right side beam-formed audio signal (954) also has a first minor lobe (954-B) having a first rear side gain, wherein an audio level difference between the front right side gain of the second major lobe and the rear side gain of the first minor lobe is controlled based on the balance signal (964), wherein the front left side beam-formed audio signal (952) also has a second minor lobe having another rear side gain, wherein an audio level difference between the front left side gain of the first major lobe and the other rear side gain of the second minor lobe is controlled based on the balance signal (964), and wherein the first minor lobe and the second minor lobe form the third beam-formed audio signal. [0016] 16. Electronic device, according to claim 1, characterized in that it further comprises: an Automatic Gain Control (AGC) module (960), coupled to the processor (950), which receives at least one beam-formed audio signal (952, 954, 956) and generates an AGC feedback signal based on the at least one beam-formed audio signal, wherein the AGC feedback signal is used to adjust the balance signal. [0017] 17. 
Electronic device, according to claim 1, characterized in that the processor comprises: a lookup table. [0018] 18. Electronic device, according to claim 1, characterized in that it further comprises at least one proximity sensor to measure a distance between an operator of the electronic device and the electronic device (100) and/or a distance between an individual in front of the electronic device (100) and the electronic device (100) and to provide the measured distance information as an imaging signal (985). [0019] 19. A method for processing a first microphone signal, a second microphone signal and a third microphone signal, comprising: generating a balance signal (964) based on an imaging signal (985); and processing the first microphone signal (921), the second microphone signal (931), and the third microphone signal (971) to generate: a front left side beam-formed audio signal (952) having a first major lobe having a front left side gain; a front right side beam-formed audio signal (954) having a second major lobe having a front right side gain; and a third beam-formed audio signal (956) having a third, rear side gain; wherein an audio level difference between the third rear side gain and both the front right side gain and the front left side gain is controlled based on the balance signal (964). [0020] 20. Method, according to claim 19, characterized in that it additionally comprises, before the step of generating a balance signal (964): measuring a distance between an operator of an electronic device (100) and the electronic device (100) and/or a distance between an individual in front of the electronic device and the electronic device (100) and providing the measured distance information as an imaging signal (985).
spatial calibration| US9794710B1|2016-07-15|2017-10-17|Sonos, Inc.|Spatial audio correction| US10372406B2|2016-07-22|2019-08-06|Sonos, Inc.|Calibration interface| US10459684B2|2016-08-05|2019-10-29|Sonos, Inc.|Calibration of a playback device based on an estimated frequency response| GB2556093A|2016-11-18|2018-05-23|Nokia Technologies Oy|Analysis of spatial metadata from multi-microphones having asymmetric geometry in devices| US10701483B2|2017-01-03|2020-06-30|Dolby Laboratories Licensing Corporation|Sound leveling in multi-channel sound capture system| CN109036448B|2017-06-12|2020-04-14|华为技术有限公司|Sound processing method and device| CN109712629B|2017-10-25|2021-05-14|北京小米移动软件有限公司|Audio file synthesis method and device| US10778900B2|2018-03-06|2020-09-15|Eikon Technologies LLC|Method and system for dynamically adjusting camera shots| US11245840B2|2018-03-06|2022-02-08|Eikon Technologies LLC|Method and system for dynamically adjusting camera shots| US20210266683A1|2018-08-17|2021-08-26|Cochlear Limited|Spatial pre-filtering in hearing prostheses| US11206484B2|2018-08-28|2021-12-21|Sonos, Inc.|Passive speaker authentication| US10299061B1|2018-08-28|2019-05-21|Sonos, Inc.|Playback device calibration| US10942548B2|2018-09-24|2021-03-09|Apple Inc.|Method for porting microphone through keyboard| US10595129B1|2018-12-26|2020-03-17|Motorola Solutions, Inc.|Methods and apparatus for configuring multiple microphones in an electronic communication device| US10966017B2|2019-01-04|2021-03-30|Gopro, Inc.|Microphone pattern based on selected image of dual lens image capture device| US10734965B1|2019-08-12|2020-08-04|Sonos, Inc.|Audio calibration of a portable playback device|
Legal status:
- 2018-02-27 | B25A | Requested transfer of rights approved. Owner name: GOOGLE TECHNOLOGY HOLDINGS LLC (US)
- 2018-12-26 | B06F | Objections, documents and/or translations needed after an examination request [chapter 6.6 patent gazette]
- 2021-05-11 | B06G | Technical and formal requirements: other requirements [chapter 6.7 patent gazette]
- 2021-07-06 | B06I | Publication of requirement cancelled [chapter 6.9 patent gazette]. Free format text: "The publication under code 6.7 in RPI No. 2627 of 2021-05-11 is annulled, as it was issued in error."
- 2021-10-26 | B09A | Decision: intention to grant [chapter 9.1 patent gazette]
- 2022-01-11 | B16A | Patent or certificate of addition of invention granted [chapter 16.1 patent gazette]. Free format text: "Term of validity: 20 (twenty) years counted from 2011-05-24, subject to the legal conditions. Patent granted in accordance with ADI 5.529/DF, which determines the change of the grant term."
Priority:
Application number | Publication number | Priority date | Filing date | Patent title
US12/822,081 | US8300845B2 | 2010-06-23 | 2010-06-23 | Electronic apparatus having microphones with controllable front-side gain and rear-side gain
PCT/US2011/037632 | WO2011162898A1 | 2010-06-23 | 2011-05-24 | Electronic apparatus having microphones with controllable front-side gain and rear-side gain